Least-Squares Methods in Reinforcement Learning for Control

Authors

  • Michail G. Lagoudakis
  • Ronald Parr
  • Michael L. Littman
Abstract

Least-squares methods have been used successfully for prediction problems in reinforcement learning, but little has been done to extend these methods to control problems. This paper presents an overview of our research efforts in using least-squares techniques for control. In our early attempts, we considered a direct extension of the Least-Squares Temporal Difference (LSTD) algorithm in the spirit of Q-learning. Later, an effort to remedy some limitations of this algorithm (approximation bias, poor sample utilization) led to the Least-Squares Policy Iteration (LSPI) algorithm, which is a form of model-free approximate policy iteration and makes efficient use of training samples collected in any arbitrary manner. The algorithms are demonstrated on a variety of learning domains, including algorithm selection, inverted pendulum balancing, bicycle balancing and riding, multiagent learning in factored domains, and, recently, two-player zero-sum Markov games and the game of Tetris.
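The two ingredients the abstract describes (LSTD-Q evaluation of the current policy from a fixed batch of samples, followed by greedy policy improvement) can be illustrated with a minimal sketch. This is not the authors' code; the toy 4-state chain, the one-hot features, and all names are illustrative assumptions:

```python
import numpy as np

N_STATES, N_ACTIONS, GAMMA = 4, 2, 0.9

def phi(s, a):
    """One-hot (tabular) feature vector for a state-action pair."""
    f = np.zeros(N_STATES * N_ACTIONS)
    f[s * N_ACTIONS + a] = 1.0
    return f

def lstdq(samples, policy):
    """LSTD-Q: least-squares fit of Q^pi from a fixed batch of (s, a, r, s')
    samples, solving A w = b with A = sum phi (phi - gamma phi')^T and
    b = sum r phi."""
    k = N_STATES * N_ACTIONS
    A, b = np.zeros((k, k)), np.zeros(k)
    for s, a, r, s2 in samples:
        f, f2 = phi(s, a), phi(s2, policy[s2])
        A += np.outer(f, f - GAMMA * f2)
        b += r * f
    return np.linalg.solve(A + 1e-6 * np.eye(k), b)  # tiny ridge for safety

def lspi(samples, n_iter=10):
    """LSPI: alternate LSTD-Q evaluation with greedy improvement, reusing
    the same batch of samples at every iteration."""
    policy = np.zeros(N_STATES, dtype=int)
    for _ in range(n_iter):
        w = lstdq(samples, policy)
        new_policy = w.reshape(N_STATES, N_ACTIONS).argmax(axis=1)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, w

# Toy chain: action 1 moves right, action 0 moves left; reward 1 for
# entering the rightmost state, which is absorbing.
def step(s, a):
    if s == N_STATES - 1:
        return 0.0, s
    s2 = s + 1 if a == 1 else max(0, s - 1)
    return (1.0 if s2 == N_STATES - 1 else 0.0), s2

samples = [(s, a, *step(s, a)) for s in range(N_STATES) for a in range(N_ACTIONS)]
policy, w = lspi(samples)  # greedy policy moves right in states 0-2
```

Because the batch is fixed and replayed at every policy iteration, every sample is reused, which is the sample-efficiency point the abstract makes about samples "collected in any arbitrary manner".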

Similar resources

Least-squares methods for policy iteration

Approximate reinforcement learning deals with the essential problem of applying reinforcement learning in large and continuous state-action spaces, by using function approximators to represent the solution. This chapter reviews least-squares methods for policy iteration, an important class of algorithms for approximate reinforcement learning. We discuss three techniques for solving the core, po...

Reinforcement Learning with Regularization Networks

This thesis aims at learning optimal behavior autonomously for high-dimensional control tasks using reinforcement learning with a kernel-based approach. Harnessing the representational power of kernel-based methods, we hope to escape the so-called 'curse of dimensionality', which otherwise implies an exponential growth in the number of basis functions. Specifically, we apply regularization networ...

A general fuzzified CMAC based reinforcement learning control for ship steering using recursive least-squares algorithm

A recursive least-squares temporal difference algorithm (RLS-TD) is derived that uses data more efficiently, with fast convergence and a small computational burden. Reinforcement learning based on recursive least-squares methods is applied to ship steering control, which provides an efficient way to improve ship steering performance. It removes the defect that the conventional i...
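The recursive formulation referred to here avoids re-solving the full least-squares system at every step by maintaining the inverse of the LSTD matrix incrementally via the Sherman-Morrison identity, at O(k^2) cost per sample. A rough sketch under my own naming and a toy two-state chain (none of it from the paper):

```python
import numpy as np

class RLSTD:
    """Recursive least-squares TD(0): incrementally maintains the LSTD
    solution theta = A^{-1} b, where A = delta*I + sum phi (phi - gamma phi')^T
    and b = sum r phi, using the Sherman-Morrison rank-one update."""

    def __init__(self, k, gamma=0.9, delta=0.01):
        self.gamma = gamma
        self.P = np.eye(k) / delta   # running estimate of A^{-1}
        self.theta = np.zeros(k)     # value-function weights

    def update(self, phi, r, phi_next):
        d = phi - self.gamma * phi_next          # TD feature difference
        Pphi = self.P @ phi
        denom = 1.0 + d @ Pphi                   # Sherman-Morrison denominator
        self.theta += Pphi * (r - d @ self.theta) / denom
        self.P -= np.outer(Pphi, d @ self.P) / denom

# Toy two-state cycle: state 0 -> state 1 with reward 1, state 1 -> state 0
# with reward 0; tabular one-hot features.
e = np.eye(2)
model = RLSTD(k=2)
for _ in range(500):
    model.update(e[0], 1.0, e[1])
    model.update(e[1], 0.0, e[0])
# model.theta now approximates the exact values [1, 0.9] / (1 - 0.9**2)
```

Each update touches only a k x k matrix and a length-k vector, which is the "less computational burden" claim in the blurb compared with batch LSTD's repeated linear solves.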

Efficient exploration through active learning for value function approximation in reinforcement learning

Appropriately designing sampling policies is highly important for obtaining good control policies in reinforcement learning. In this paper, we first show that the least-squares policy iteration (LSPI) framework allows us to employ statistical active-learning methods for linear regression. We then propose a method for designing good sampling policies for efficient exploration, which is particularl...

An Algorithmic Survey of Parametric Value Function Approximation

Reinforcement learning is a machine learning answer to the optimal control problem. It consists in learning an optimal control policy through interactions with the system to be controlled, the quality of this policy being quantified by the so-called value function. A recurrent subtopic of reinforcement learning is to compute an approximation of this value function when the system is too large f...

Publication date: 2002